Neural representations for sensory-motor control, II: Learning a head-centered visuomotor representation of 3-D target position

Authors

  • Stephen Grossberg
  • Frank H. Guenther
  • Daniel Bullock
  • Douglas N. Greve
Abstract

A neural network model is described for how an invariant head-centered representation of 3-D target position can be autonomously learned by the brain in real time. Once learned, such a target representation may be used to control both eye and limb movements. The target representation is derived from the positions of both eyes in the head, and the locations which the target activates on the retinas of both eyes. A Vector Associative Map (VAM) learns the many-to-one transformation from multiple combinations of eye-and-retinal position to invariant 3-D target position. Eye position is derived from outflow movement signals to the eye muscles. Two successive stages of opponent processing convert these corollary discharges into a head-centered representation that closely approximates the azimuth, elevation, and vergence of the eyes' gaze position with respect to a cyclopean origin located between the eyes. VAM learning combines this cyclopean representation of present gaze position with binocular retinal information about target position into an invariant representation of 3-D target position with respect to the head. VAM learning can use a teaching vector that is externally derived from the positions of the eyes when they foveate the target. A VAM can also autonomously discover and learn the invariant representation, without an explicit teacher, by generating internal error signals from environmental fluctuations in which these invariant properties are implicit. VAM error signals are computed by Difference Vectors (DVs) that are zeroed by the VAM learning process. VAMs may be organized into VAM Cascades for learning and performing both sensory-to-spatial maps and spatial-to-motor maps. These multiple uses clarify why DV-type properties are computed by cells in the parietal, frontal, and motor cortices of many mammals. VAMs are modulated by gating signals that express different aspects of the will-to-act. These signals transform a single invariant representation into movements of different speed (GO signal) and size (GRO signal), and thereby enable VAM controllers to match a planned action sequence to variable environmental conditions.

Keywords: Neural networks, Sensory-motor control, Spatial representation, Learning, Vector associative map, Gaze, Motor plan.

1. SPATIAL REPRESENTATIONS FOR THE NEURAL CONTROL OF FLEXIBLE MOVEMENTS

This paper introduces a neural network model of how the brain learns spatial representations with which to control sensory-guided and memory-guided eye and limb movements. These spatial representations are expressed in both head-centered coordinates and body-centered coordinates since the eyes move within the head, whereas the head, arms, and legs move with respect to the body. This paper describes a model for learning an invariant head-centered representation of 3-D target position. A model for learning an invariant body-centered representation of 3-D target position will be described elsewhere (Guenther, Bullock, Greve, &

Acknowledgements: The authors wish to thank Kelly A. Dumont and Carol Y. Jefferson for their valuable assistance in the preparation of the manuscript.

* Supported in part by the National Science Foundation (NSF IRI-87-16960 and NSF IRI-90-24877) and the Office of Naval Research (ONR N00014-92-J1309).
† Supported in part by the National Science Foundation (NSF IRI-87-16960 and NSF IRI-90-24877).

Requests for reprints should be sent to Stephen Grossberg, Center for Adaptive Systems, Boston University, 111 Cummington Street, Room 244, Boston, MA 02215.
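The abstract's core learning idea, an adaptive map whose Difference Vector (DV) error signal is driven to zero by learning, can be illustrated with a minimal numerical sketch. This is not the authors' published circuit: the LMS-style outer-product update, the dimensions, the learning rate, and all variable names below are assumptions chosen purely for illustration.

```python
import numpy as np

# Hedged sketch of DV-zeroing learning in a Vector Associative Map (VAM).
# A fixed linear transform stands in for the environment's many-to-one
# mapping from eye/retinal signals to 3-D target position; this linearity
# is an assumption, not a claim about the published model.

rng = np.random.default_rng(0)

n_in, n_out = 6, 3            # e.g. eye-position + retinal signals -> 3-D target
W = np.zeros((n_out, n_in))   # adaptive weights, initially untuned

def dv(target, x, W):
    """Difference Vector: mismatch between teaching vector and current map."""
    return target - W @ x

# A fixed "true" transform plays the role of the teacher (e.g. the eye
# positions produced when the eyes foveate the target).
W_true = rng.normal(size=(n_out, n_in))

lr = 0.05
for _ in range(2000):
    x = rng.normal(size=n_in)     # sampled eye/retinal configuration
    target = W_true @ x           # externally derived teaching vector
    e = dv(target, x, W)          # DV error signal
    W += lr * np.outer(e, x)      # learning drives the DV toward zero

# After learning, DVs on novel inputs are essentially zero.
x = rng.normal(size=n_in)
assert np.linalg.norm(dv(W_true @ x, x, W)) < 1e-6
```

The outer-product update shrinks the DV on every sample, so the map converges to the environmental transform without ever representing it explicitly; this mirrors, in toy form, how zeroing the DV makes the learned representation invariant across eye/retinal combinations.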

Similar articles

Neural representations for sensory-motor control, I: Head-centered 3-D target positions from opponent eye commands.

This article describes how corollary discharges from outflow eye movement commands can be transformed by two stages of opponent neural processing into a head-centered representation of 3-D target position. This representation implicitly defines a cyclopean coordinate system whose variables approximate the binocular vergence and spherical horizontal and vertical angles with respect to the observ...
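The cyclopean variables mentioned here (binocular vergence plus horizontal and vertical gaze angles) can be sketched from the two horizontal eye angles using standard Hering-style definitions. The sign convention below (angles positive rightward from each eye's straight-ahead direction) is an assumption for illustration, not the article's opponent-processing circuit.

```python
import math

# Hedged sketch: re-expressing binocular eye positions as a cyclopean
# gaze angle (version) plus a convergence angle (vergence).

def cyclopean(theta_left, theta_right):
    """Return (version, vergence) from left/right horizontal eye angles
    measured positive rightward from each eye's straight-ahead direction."""
    version = 0.5 * (theta_left + theta_right)  # cyclopean azimuth of gaze
    vergence = theta_left - theta_right         # convergence on the target
    return version, vergence

# Symmetric convergence on a midline target: zero version, 10 deg vergence.
v, g = cyclopean(math.radians(5.0), math.radians(-5.0))
assert abs(v) < 1e-12 and abs(g - math.radians(10.0)) < 1e-12
```

Version and vergence are simply the sum and difference channels of the two eye commands, which is why two stages of opponent (push-pull) processing suffice to extract them from outflow signals.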

A new model of spatial representations in multimodal brain areas

Most models of spatial representations in the cortex assume cells with limited receptive fields that are defined in a particular egocentric frame of reference. However, cells outside of primary sensory cortex are either gain modulated by postural input or partially shifting. We show that solving classical spatial tasks, like sensory prediction, multi-sensory integration, sensory-motor transform...

Computations for geometrically accurate visually guided reaching in 3-D space.

A fundamental question in neuroscience is how the brain transforms visual signals into accurate three-dimensional (3-D) reach commands, but surprisingly this has never been formally modeled. Here, we developed such a model and tested its predictions experimentally in humans. Our visuomotor transformation model used visual information about current hand and desired target positions to compute th...

A Distributed Population Mechanism for the 3-D Oculomotor Reference Frame Transformation

Human saccades require a non-linear, eye orientation-dependent reference frame transformation in order to transform visual codes to the motor commands for eye muscles. Primate neurophysiology suggests that this transformation is performed between the superior colliculus and brainstem burst neurons, but provides few clues as to how this is done. To understand how the brain might accomplish th...

A Dedicated Binding Mechanism for the Visual Control of Movement

The human motor system is remarkably proficient in the online control of visually guided movements, adjusting to changes in the visual scene within 100 ms [1-3]. This is achieved through a set of highly automatic processes [4] translating visual information into representations suitable for motor control [5, 6]. For this to be accomplished, visual information pertaining to target and hand need ...


Journal:
  • Neural Networks

Volume 6, Issue -

Pages -

Published: 1993